Multi-Armed Bandits for Minesweeper: Profiting From Exploration–Exploitation Synergy
Authors
Abstract
A popular computer puzzle, the game of Minesweeper requires its human players to have a mix of both luck and strategy to succeed. Analyzing these aspects more formally in our research, we assessed the feasibility of a novel methodology based on reinforcement learning as an adequate approach to tackle the problem presented by this game. For this purpose, we employed multi-armed bandit algorithms, which were carefully adapted in order to enable their use in defining autonomous computational players, aiming to make the best of some of the game's peculiarities. After an experimental evaluation, the results showed that this proposal was indeed successful, especially on smaller boards, such as the standard beginner level. Despite this fact, the main contribution of this work is a detailed examination of the game from this perspective, which led to various original insights that are thoroughly discussed.
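The abstract does not say which bandit algorithm was adapted or how arms and rewards were defined, so the following is only a minimal sketch: a classic UCB1 learner in Python, under the hypothetical assumption that each unrevealed board cell is an arm and that revealing a safe cell pays reward 1 (0 for a mine).

```python
import math

class UCB1:
    """Classic UCB1: per-arm running mean plus an exploration bonus
    that shrinks as an arm is sampled more often."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.total = 0                # total pulls so far

    def select(self):
        # Try every arm once before trusting the statistics.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        # Pick the arm maximizing mean + sqrt(2 ln t / n_a).
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2.0 * math.log(self.total) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.total += 1
        # Incremental mean: new_mean = old_mean + (r - old_mean) / n.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In a Minesweeper setting one could, for instance, map each board cell to an arm index, call select() to choose the next cell to reveal, and feed update() the observed reward; how the paper actually encodes arms and rewards is not stated in the abstract.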
Similar Resources
Contextual Multi-Armed Bandits
We study contextual multi-armed bandit problems where the context comes from a metric space and the payoff satisfies a Lipschitz condition with respect to the metric. Abstractly, a contextual multi-armed bandit problem models a situation where, in a sequence of independent trials, an online algorithm chooses, based on a given context (side information), an action from a set of possible actions ...
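As a rough illustration of this setting (not the algorithm from the cited paper), the sketch below discretizes a one-dimensional context in [0, 1] into fixed-width buckets and runs an independent epsilon-greedy learner per bucket; the Lipschitz condition is what justifies treating nearby contexts as interchangeable. All names and parameters here are illustrative.

```python
import random
from collections import defaultdict

BUCKET_WIDTH = 0.1  # coarseness of the context discretization

def bucket(context):
    """Map a context in [0, 1] to a fixed-width bucket; Lipschitz payoffs
    mean contexts in the same bucket have similar expected rewards."""
    return min(int(context / BUCKET_WIDTH), int(1.0 / BUCKET_WIDTH) - 1)

class PerBucketEpsilonGreedy:
    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.counts = defaultdict(int)   # pulls per (bucket, arm)
        self.means = defaultdict(float)  # mean reward per (bucket, arm)

    def select(self, context):
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)  # explore uniformly
        b = bucket(context)
        return max(range(self.n_arms), key=lambda a: self.means[(b, a)])

    def update(self, context, arm, reward):
        key = (bucket(context), arm)
        self.counts[key] += 1
        self.means[key] += (reward - self.means[key]) / self.counts[key]
```

Finer buckets reduce the discretization bias but leave fewer samples per bucket; the metric-space formulations studied in this line of work adapt that trade-off more cleverly than a fixed grid.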
Staged Multi-armed Bandits
In conventional multi-armed bandits (MAB) and other reinforcement learning methods, the learner sequentially chooses actions and obtains a reward (which can be possibly missing, delayed or erroneous) after each taken action. This reward is then used by the learner to improve its future decisions. However, in numerous applications, ranging from personalized patient treatment to personalized web-...
Mortal Multi-Armed Bandits
We formulate and study a new variant of the k-armed bandit problem, motivated by e-commerce applications. In our model, arms have (stochastic) lifetime after which they expire. In this setting an algorithm needs to continuously explore new arms, in contrast to the standard k-armed bandit model in which arms are available indefinitely and exploration is reduced once an optimal arm is identified ...
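A toy rendition of this model, under assumptions of ours rather than the paper's: each arm carries a hidden Bernoulli mean and a finite pull budget, expired arms are replaced by fresh ones, and a simple epsilon-greedy learner must therefore keep exploring newcomers.

```python
import random

class MortalBanditPool:
    """Epsilon-greedy over a pool of mortal arms: each arm has a hidden
    Bernoulli mean and a finite lifetime, and expired arms are replaced,
    so exploration can never be fully switched off."""

    def __init__(self, pool_size=10, epsilon=0.2):
        self.epsilon = epsilon
        self.arms = [self._new_arm() for _ in range(pool_size)]

    @staticmethod
    def _new_arm():
        return {"mean": random.random(),          # hidden payoff probability
                "life": random.randint(20, 100),  # pulls until expiry
                "count": 0, "value": 0.0}         # learner's statistics

    def step(self):
        # Replace expired arms with fresh, unexplored ones.
        self.arms = [a if a["life"] > 0 else self._new_arm()
                     for a in self.arms]
        if random.random() < self.epsilon:
            arm = random.choice(self.arms)                  # explore
        else:
            arm = max(self.arms, key=lambda a: a["value"])  # exploit
        reward = 1.0 if random.random() < arm["mean"] else 0.0
        arm["count"] += 1
        arm["value"] += (reward - arm["value"]) / arm["count"]
        arm["life"] -= 1
        return reward
```

The replacement step is the defining feature: unlike the standard model, committing to a single identified-best arm is impossible, because that arm will eventually die.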
Regional Multi-Armed Bandits
We consider a variant of the classic multiarmed bandit problem where the expected reward of each arm is a function of an unknown parameter. The arms are divided into different groups, each of which has a common parameter. Therefore, when the player selects an arm at each time slot, information of other arms in the same group is also revealed. This regional bandit model naturally bridges the non...
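A small numeric sketch of the idea, with a made-up parameterization that is not taken from the paper: every arm a pays a Bernoulli reward with mean w[a] * theta[g], where the weight w[a] is known and theta[g] is shared by the arm's group, so any pull inside a group refines the estimate used by all of that group's arms.

```python
import random

# Hypothetical setup: arm a in group g pays Bernoulli(w[a] * theta[g]),
# with w[a] known and theta[g] the group's unknown shared parameter.
group_of = {0: "g1", 1: "g1", 2: "g2", 3: "g2"}
w = {0: 0.5, 1: 0.9, 2: 0.4, 3: 0.8}
true_theta = {"g1": 0.6, "g2": 0.9}  # unknown to the learner

pulls = {"g1": 0, "g2": 0}
sums = {"g1": 0.0, "g2": 0.0}  # pooled, de-weighted rewards per group

def theta_hat(g):
    # Optimistic initial estimate forces each group to be tried once.
    return sums[g] / pulls[g] if pulls[g] else 1.0

for t in range(2000):
    if random.random() < 0.1:
        arm = random.choice(list(w))  # occasional exploration
    else:
        # Greedy on estimated means: any observation in a group updates
        # the estimate shared by all its arms (the "regional" effect).
        arm = max(w, key=lambda a: w[a] * theta_hat(group_of[a]))
    g = group_of[arm]
    reward = 1.0 if random.random() < w[arm] * true_theta[g] else 0.0
    sums[g] += reward / w[arm]  # E[reward / w[a]] = theta[g]
    pulls[g] += 1

print({g: round(theta_hat(g), 2) for g in pulls})  # approaches true_theta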
Multi-Objective X-Armed Bandits
Many of the standard optimization algorithms focus on optimizing a single, scalar feedback signal. However, real-life optimization problems often require a simultaneous optimization of more than one objective. In this paper, we propose a multi-objective extension to the standard X-armed bandit problem. As the feedback signal is now vector-valued, the goal of the agent is to sample actions in t...
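With vector-valued feedback, the natural notion of a "best" arm becomes Pareto optimality. The snippet below illustrates the concept (it is not the paper's algorithm): the dominance test and the resulting set of non-dominated arms a multi-objective bandit would sample among.

```python
def dominates(u, v):
    """u Pareto-dominates v: no worse in every objective and strictly
    better in at least one."""
    return (all(a >= b for a, b in zip(u, v))
            and any(a > b for a, b in zip(u, v)))

def pareto_front(mean_vectors):
    """Arms whose estimated mean-reward vectors no other arm dominates."""
    return [a for a, u in mean_vectors.items()
            if not any(dominates(v, u)
                       for b, v in mean_vectors.items() if b != a)]

# Hypothetical two-objective reward estimates for three arms:
means = {"a": (0.8, 0.2), "b": (0.5, 0.5), "c": (0.4, 0.4)}
print(pareto_front(means))  # ['a', 'b'] -- 'c' is dominated by 'b'
```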
Journal
Journal title: IEEE Transactions on Games
Year: 2022
ISSN: 2475-1502, 2475-1510
DOI: https://doi.org/10.1109/tg.2021.3082909